
refactor(scheduler): migrate NodeInfo, PodInfo, and plugin resource reads to vectors #1146

Open
enoodle wants to merge 5 commits into main from erez/migrate-node-info-pod-info-to-vectors

Conversation


enoodle (Collaborator) commented Mar 5, 2026


Description

Convert all read-path methods from Resource to ResourceVector operations:

  • NodeInfo: IsTaskAllocatable, FittingError, GetSumOfIdleGPUs, IsCPUOnlyNode
  • External plugins: proportion, topology, nodeplacement, nodeavailability, resourcetype
  • Framework: session logging, statement references
  • Error handling: pod_errors, job_errors

Add AcceptedResourceVector to PodInfo. Add a QuantifyVector util to the proportion plugin. Rewrite the topology calcNodeAccommodation from iterative pod probing to a division-based vector approach.

Resource fields are still maintained via dual-write for backward compatibility until their removal in subsequent commits.
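The dual-write approach described above can be sketched roughly as follows. Everything here is a simplified, hypothetical stand-in for illustration (ResourceVector as a float slice, ResourceVectorMap as a name-to-index map, and the AddIdleGPUs helper); the scheduler's real types and field names differ:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the scheduler's resource types;
// the names mirror the PR (ResourceVector, ResourceVectorMap) but the
// real definitions live in the KAI-scheduler resource_info package.
type ResourceVector []float64
type ResourceVectorMap map[string]int

// GetIndex returns the vector slot for a named resource (e.g. "gpu").
func (m ResourceVectorMap) GetIndex(name string) int { return m[name] }
func (v ResourceVector) Get(idx int) float64         { return v[idx] }

// NodeInfo sketches the dual-write pattern: the legacy scalar field and
// the new vector are updated in lockstep on the write path, so read
// paths can migrate to vector access one call site at a time before the
// scalar fields are removed.
type NodeInfo struct {
	IdleGPUs   float64 // legacy scalar, kept for backward compatibility
	IdleVector ResourceVector
	VectorMap  ResourceVectorMap
}

func (n *NodeInfo) AddIdleGPUs(count float64) {
	n.IdleGPUs += count                                // legacy write
	n.IdleVector[n.VectorMap.GetIndex("gpu")] += count // vector write
}

func main() {
	n := &NodeInfo{
		IdleVector: ResourceVector{0, 0},
		VectorMap:  ResourceVectorMap{"gpu": 0, "cpu": 1},
	}
	n.AddIdleGPUs(4)
	fmt.Println(n.IdleGPUs, n.IdleVector.Get(n.VectorMap.GetIndex("gpu"))) // 4 4
}
```

Once all read paths consult the vector, the scalar field (and the first line of AddIdleGPUs) can be deleted without touching callers again.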

Related Issues

Fixes #

Checklist

  • Self-reviewed
  • Added/updated tests (if needed)
  • Updated documentation (if needed)

Summary by CodeRabbit

Release Notes

  • Refactor
    • Modernized resource tracking infrastructure by transitioning from scalar resource representations to vector-based resource tracking, enabling more precise allocation of GPUs, CPUs, memory, and custom resources.
    • Enhanced resource requirement comparisons and capacity calculations throughout the scheduler to use vector operations.
    • Updated error reporting for resource-related failures to provide more detailed, vector-aware diagnostics.


coderabbitai bot commented Mar 5, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 45ded0d3-3a3a-42b9-933e-8bef0f4fb484

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

The pull request refactors the scheduler's resource representation system, migrating from scalar Resource pointers to vector-based ResourceVector value types with an accompanying ResourceVectorMap for indexed access. This systematic change affects resource allocation, topology calculations, error handling, and logging throughout multiple scheduler components and plugins.

Changes

Cohort / File(s) Summary
Core Resource Representation & Error Handling
pkg/scheduler/api/common_info/job_errors.go, pkg/scheduler/api/common_info/job_errors_test.go, pkg/scheduler/api/common_info/pod_errors.go, pkg/scheduler/api/common_info/pod_errors_test.go
Updated error constructors to accept ResourceVector and ResourceVectorMap instead of scalar Resource pointers. NewTopologyInsufficientResourcesError and NewFitErrorInsufficientResource now use vector-based resource parameters with indexed access for GPU, CPU, memory, and MIG resources. Test fixtures updated to use ToVector(vectorMap) for resource construction.
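A vector-aware error constructor along these lines might look like the following sketch. The function name, signature, and message format are assumptions based on the summary above, not the real common_info API:

```go
package main

import "fmt"

// Simplified stand-ins for the scheduler's vector types (assumptions).
type ResourceVector []float64
type ResourceVectorMap map[string]int

func (m ResourceVectorMap) GetIndex(name string) int { return m[name] }
func (v ResourceVector) Get(idx int) float64         { return v[idx] }

// newFitErrorInsufficientResource sketches building a diagnostic from
// indexed vector access instead of scalar Resource getters: the vector
// map resolves each named resource to its slot in both vectors.
func newFitErrorInsufficientResource(requested, available ResourceVector, m ResourceVectorMap) error {
	gpuIdx, cpuIdx, memIdx := m.GetIndex("gpu"), m.GetIndex("cpu"), m.GetIndex("memory")
	return fmt.Errorf(
		"insufficient resources: requested gpu=%v cpu=%v memory=%v, available gpu=%v cpu=%v memory=%v",
		requested.Get(gpuIdx), requested.Get(cpuIdx), requested.Get(memIdx),
		available.Get(gpuIdx), available.Get(cpuIdx), available.Get(memIdx),
	)
}

func main() {
	vm := ResourceVectorMap{"gpu": 0, "cpu": 1, "memory": 2}
	err := newFitErrorInsufficientResource(
		ResourceVector{2, 500, 4e9}, ResourceVector{1, 8000, 32e9}, vm)
	fmt.Println(err)
}
```

Because every requested resource has a slot in the vector, the same pattern extends to MIG profiles and custom resources without adding new scalar getters.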
Node & Pod Info Structures
pkg/scheduler/api/node_info/node_info.go, pkg/scheduler/api/node_info/node_info_test.go, pkg/scheduler/api/pod_info/pod_info.go
Introduced AllocatableVector, IdleVector, ReleasingVector, and VectorMap fields to NodeInfo and AcceptedResourceVector to PodInfo. Updated initialization, cloning, and SetVectorMap logic to maintain vector representations. lessEqualTaskToNodeResources and isTaskAllocatableOnNonAllocatedResources now accept ResourceVector parameters.
GPU Sharing & Node Availability
pkg/scheduler/api/node_info/gpu_sharing_node_info.go, pkg/scheduler/cache/cluster_info/cluster_info.go, pkg/scheduler/plugins/nodeavailability/nodeavailability.go, pkg/scheduler/plugins/resourcetype/resourcetype.go
Replaced scalar GPU counting (Idle.GPUs(), Used.GPUs()) with vector-indexed access (IdleVector.Get(gpuIdx), UsedVector.Get(gpuIdx)) using VectorMap.GetIndex("gpu"). GPU resource accounting now fully vector-based.
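The vector-indexed access pattern above, resolving the GPU slot once via VectorMap.GetIndex("gpu") and reading it through the vector, might look roughly like this sketch (the types and the sumOfIdleGPUs helper are hypothetical simplifications, not the actual node_info code):

```go
package main

import "fmt"

// Simplified stand-ins for the scheduler's vector types (assumptions).
type ResourceVector []float64
type ResourceVectorMap map[string]int

func (m ResourceVectorMap) GetIndex(name string) int { return m[name] }
func (v ResourceVector) Get(idx int) float64         { return v[idx] }

type NodeInfo struct {
	IdleVector ResourceVector
	VectorMap  ResourceVectorMap
}

// sumOfIdleGPUs resolves the "gpu" slot per node and reads the idle
// count through the vector, replacing the old scalar Idle.GPUs() call.
func sumOfIdleGPUs(nodes []*NodeInfo) float64 {
	total := 0.0
	for _, node := range nodes {
		gpuIdx := node.VectorMap.GetIndex("gpu")
		total += node.IdleVector.Get(gpuIdx)
	}
	return total
}

func main() {
	vm := ResourceVectorMap{"gpu": 0, "cpu": 1}
	nodes := []*NodeInfo{
		{IdleVector: ResourceVector{4, 8000}, VectorMap: vm},
		{IdleVector: ResourceVector{2, 16000}, VectorMap: vm},
	}
	fmt.Println(sumOfIdleGPUs(nodes)) // 6
}
```

Note how this relates to the reviewer nitpick later in the thread: the "gpu" key appears at each call site, so centralizing it in a constant reduces drift risk during the migration.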
Scheduler Framework & Session
pkg/scheduler/framework/session.go, pkg/scheduler/framework/statement.go
Updated GPU resource accounting and logging to use vector-based representations (IdleVector, UsedVector, ReleasingVector, ResReqVector). Replaced scalar Idle/Releasing/Used counts with indexed vector access.
Node Placement Plugins
pkg/scheduler/plugins/nodeplacement/pack.go, pkg/scheduler/plugins/nodeplacement/spread.go, pkg/scheduler/plugins/nodeplacement/nodepack_test.go, pkg/scheduler/plugins/nodeplacement/nodespread_test.go
Replaced Allocatable.Get(resourceName) with AllocatableVector.Get(node.VectorMap.GetIndex(string(resourceName))) for resource availability checks. Test setup now initializes AllocatableVector, IdleVector, ReleasingVector, and VectorMap.
Capacity & Proportion Plugins
pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy.go, pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy_test.go, pkg/scheduler/plugins/proportion/proportion.go, pkg/scheduler/plugins/proportion/proportion_test.go
Replaced per-task resource accounting with vector-based representation. QuantifyVector utility introduced for vector quantification. Refactored totalVictimsResources to store ResourceVector slices. Updated quota extraction and accumulation to use vector operations.
Reclaimability & Strategies
pkg/scheduler/plugins/proportion/reclaimable/reclaimer_info.go, pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go, pkg/scheduler/plugins/proportion/reclaimable/reclaimable_test.go, pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go, pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies_test.go
Updated ReclaimerInfo.RequiredResources to use ResourceVector and added VectorMap field. Reclaimable method signatures now accept ResourceVector and vectorMap. Updated all quota calculations to use QuantifyVector with vector map.
Utility Functions
pkg/scheduler/plugins/proportion/utils/utils.go
Added QuantifyVector function to convert ResourceVector to ResourceQuantities using a ResourceVectorMap for index-based resource extraction.
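As a rough illustration of what such a conversion utility could look like (the signature and types here are assumptions based on the summary, not the actual KAI-scheduler code):

```go
package main

import "fmt"

// Simplified stand-ins for the scheduler's types (assumptions).
type ResourceVector []float64
type ResourceVectorMap map[string]int
type ResourceQuantities map[string]float64

func (m ResourceVectorMap) GetIndex(name string) int { return m[name] }
func (v ResourceVector) Get(idx int) float64         { return v[idx] }

// QuantifyVector converts an index-based vector back into a name-keyed
// quantity map, using the vector map to locate each resource's slot.
func QuantifyVector(v ResourceVector, m ResourceVectorMap) ResourceQuantities {
	out := make(ResourceQuantities, len(m))
	for name, idx := range m {
		out[name] = v.Get(idx)
	}
	return out
}

func main() {
	vm := ResourceVectorMap{"gpu": 0, "cpu": 1, "memory": 2}
	q := QuantifyVector(ResourceVector{2, 4000, 8e9}, vm)
	fmt.Println(q["gpu"], q["cpu"]) // 2 4000
}
```

This kind of bridge lets quota logic that still expects name-keyed quantities consume vector-based state during the migration.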
Topology Plugin
pkg/scheduler/plugins/topology/job_filtering.go, pkg/scheduler/plugins/topology/job_filtering_test.go, pkg/scheduler/plugins/topology/node_scoring_test.go, pkg/scheduler/plugins/topology/topology_plugin.go, pkg/scheduler/plugins/topology/topology_structs.go
Extensive refactoring replacing scalar Resource with ResourceVector throughout topology allocation logic. Updated getTasksAllocationMetadata, getJobAllocatableDomains, checkJobDomainFit, and related methods to use vector-based resources. DomainInfo.IdleOrReleasingResources replaced with IdleOrReleasingVector. Info struct now includes VectorMap field.
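The division-based accommodation approach mentioned in the PR description can be illustrated with a minimal sketch: instead of iteratively probing "does one more pod fit?", divide each idle resource by the per-pod request and take the minimum across dimensions. This is an illustrative reconstruction under stated assumptions, not the plugin's actual calcNodeAccommodation:

```go
package main

import (
	"fmt"
	"math"
)

// Simplified stand-in for the scheduler's vector type (assumption).
type ResourceVector []float64

// calcNodeAccommodation returns how many identical pods fit on a node:
// for each requested resource, floor(idle / request), then the minimum
// over all dimensions. Resources the pod does not request are skipped.
func calcNodeAccommodation(idle, req ResourceVector) int {
	fit := math.MaxInt32
	requested := false
	for i, r := range req {
		if r <= 0 {
			continue // resource not requested by this pod
		}
		requested = true
		n := int(idle[i] / r)
		if n < fit {
			fit = n
		}
	}
	if !requested {
		return 0 // no positive requests: nothing to size in this sketch
	}
	return fit
}

func main() {
	idle := ResourceVector{8, 16000, 64e9} // gpu, cpu (millicores), memory
	req := ResourceVector{2, 3000, 8e9}    // per-pod request
	fmt.Println(calcNodeAccommodation(idle, req)) // 4
}
```

A single pass of divisions replaces O(pods) probe iterations, which is likely why the refactor is feasible once resources are vectors with aligned indices.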
Scheduler Actions & Tests
pkg/scheduler/actions/common/allocate.go, pkg/scheduler/actions/common/feasible_nodes_test.go, pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go, pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go, pkg/scheduler/actions/common/solvers/pod_scenario_builder_test.go
Updated log messages to use ResReqVector instead of ResReq. Test initialization now includes ResourceVectorMap setup and vector initialization for nodes. Updated GPU accounting to use vector-indexed access.
Job Ordering Utilities
pkg/scheduler/actions/utils/job_order_by_queue_test.go
Updated test fixtures to include VectorMap, AcceptedResourceVector, and AllocatedVector alongside or replacing scalar resource fields. Test data structures now support vector-based resource representations.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 Vectors now dance where scalars once stood
Resource maps guide the scheduler's wood
From pointers to values, the system takes flight
With indexed resource pools burning bright
A migration complete, the logic shines right!

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 5.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check — ❓ Inconclusive: the title is truncated and incomplete ('refactor(scheduler): migrate NodeInfo, PodInfo, and plugin resource r…'), cutting off mid-word. Resolution: complete the PR title to convey the full scope, e.g. 'refactor(scheduler): migrate NodeInfo, PodInfo, and plugin resources to vectors'.

✅ Passed checks (1 passed)

  • Description check — ✅ Passed: the PR description covers the main changes adequately, explaining the conversion to ResourceVector operations and updated plugin behavior. However, it lacks detailed context for the full scope and rationale of the architectural change.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


github-actions bot commented Mar 5, 2026

📊 Performance Benchmark Results

Comparing PR (erez/migrate-node-info-pod-info-to-vectors) vs main branch:

main-bench.txt:155: parsing iteration count: invalid syntax
pr-bench.txt:155: parsing iteration count: invalid syntax
goos: linux
goarch: amd64
pkg: github.com/NVIDIA/KAI-scheduler/pkg/scheduler/actions
cpu: AMD EPYC 7763 64-Core Processor                
                                    │ main-bench.txt │            pr-bench.txt            │
                                    │     sec/op     │    sec/op     vs base              │
AllocateAction_SmallCluster-4           108.8m ±  1%   108.9m ±  1%       ~ (p=0.394 n=6)
AllocateAction_MediumCluster-4          139.4m ±  2%   138.9m ±  1%       ~ (p=0.310 n=6)
AllocateAction_LargeCluster-4           226.5m ± 24%   221.4m ± 17%       ~ (p=0.180 n=6)
ReclaimAction_SmallCluster-4            103.0m ±  0%   103.0m ±  0%       ~ (p=1.000 n=6)
ReclaimAction_MediumCluster-4           106.3m ±  1%   106.3m ±  1%       ~ (p=0.589 n=6)
PreemptAction_SmallCluster-4            103.8m ±  0%   103.8m ±  0%       ~ (p=0.699 n=6)
PreemptAction_MediumCluster-4           114.2m ±  0%   113.7m ±  0%  -0.47% (p=0.009 n=6)
ConsolidationAction_SmallCluster-4      115.1m ±  0%   115.3m ±  0%       ~ (p=0.240 n=6)
ConsolidationAction_MediumCluster-4     209.0m ±  2%   206.5m ±  2%       ~ (p=0.132 n=6)
FullSchedulingCycle_SmallCluster-4      105.7m ±  0%   105.7m ±  0%       ~ (p=0.589 n=6)
FullSchedulingCycle_MediumCluster-4     121.0m ±  1%   121.4m ±  0%       ~ (p=0.065 n=6)
FullSchedulingCycle_LargeCluster-4      162.8m ±  1%   163.8m ±  1%       ~ (p=0.132 n=6)
ManyQueues_MediumCluster-4              143.3m ±  0%   143.7m ±  1%  +0.25% (p=0.004 n=6)
GangScheduling_MediumCluster-4          161.2m ±  2%   161.8m ±  3%       ~ (p=0.394 n=6)
geomean                                 132.5m         132.3m        -0.19%

                                    │ main-bench.txt │            pr-bench.txt             │
                                    │      B/op      │     B/op       vs base              │
AllocateAction_SmallCluster-4           2.348Mi ± 0%    2.374Mi ± 0%  +1.11% (p=0.002 n=6)
AllocateAction_MediumCluster-4          12.63Mi ± 0%    12.73Mi ± 0%  +0.79% (p=0.002 n=6)
AllocateAction_LargeCluster-4           43.47Mi ± 0%    43.72Mi ± 0%  +0.57% (p=0.002 n=6)
ReclaimAction_SmallCluster-4            995.0Ki ± 1%   1003.6Ki ± 1%  +0.86% (p=0.026 n=6)
ReclaimAction_MediumCluster-4           3.260Mi ± 0%    3.304Mi ± 0%  +1.35% (p=0.002 n=6)
PreemptAction_SmallCluster-4            1.092Mi ± 0%    1.108Mi ± 1%  +1.44% (p=0.002 n=6)
PreemptAction_MediumCluster-4           4.367Mi ± 0%    4.432Mi ± 0%  +1.50% (p=0.002 n=6)
ConsolidationAction_SmallCluster-4      5.898Mi ± 0%    5.905Mi ± 0%  +0.11% (p=0.009 n=6)
ConsolidationAction_MediumCluster-4     48.41Mi ± 0%    48.21Mi ± 0%  -0.41% (p=0.002 n=6)
FullSchedulingCycle_SmallCluster-4      1.524Mi ± 0%    1.540Mi ± 0%  +1.03% (p=0.002 n=6)
FullSchedulingCycle_MediumCluster-4     7.444Mi ± 0%    7.517Mi ± 0%  +0.98% (p=0.002 n=6)
FullSchedulingCycle_LargeCluster-4      24.31Mi ± 0%    24.49Mi ± 0%  +0.72% (p=0.002 n=6)
ManyQueues_MediumCluster-4              17.09Mi ± 0%    17.20Mi ± 0%  +0.60% (p=0.002 n=6)
GangScheduling_MediumCluster-4          18.54Mi ± 0%    18.72Mi ± 0%  +1.01% (p=0.002 n=6)
geomean                                 6.835Mi         6.892Mi       +0.83%

                                    │ main-bench.txt │           pr-bench.txt            │
                                    │   allocs/op    │  allocs/op   vs base              │
AllocateAction_SmallCluster-4            37.85k ± 0%   38.48k ± 0%  +1.66% (p=0.002 n=6)
AllocateAction_MediumCluster-4           331.8k ± 0%   334.3k ± 0%  +0.77% (p=0.002 n=6)
AllocateAction_LargeCluster-4            1.410M ± 0%   1.417M ± 0%  +0.45% (p=0.002 n=6)
ReclaimAction_SmallCluster-4             9.241k ± 0%   9.419k ± 0%  +1.93% (p=0.002 n=6)
ReclaimAction_MediumCluster-4            29.92k ± 0%   30.67k ± 0%  +2.50% (p=0.002 n=6)
PreemptAction_SmallCluster-4             11.93k ± 0%   12.13k ± 0%  +1.72% (p=0.002 n=6)
PreemptAction_MediumCluster-4            41.73k ± 0%   42.58k ± 0%  +2.03% (p=0.002 n=6)
ConsolidationAction_SmallCluster-4       75.93k ± 0%   77.08k ± 0%  +1.52% (p=0.002 n=6)
ConsolidationAction_MediumCluster-4      697.0k ± 0%   706.9k ± 0%  +1.41% (p=0.002 n=6)
FullSchedulingCycle_SmallCluster-4       22.61k ± 0%   23.01k ± 0%  +1.79% (p=0.002 n=6)
FullSchedulingCycle_MediumCluster-4      179.6k ± 0%   181.3k ± 0%  +0.92% (p=0.002 n=6)
FullSchedulingCycle_LargeCluster-4       739.3k ± 0%   743.4k ± 0%  +0.55% (p=0.002 n=6)
ManyQueues_MediumCluster-4               369.8k ± 0%   372.4k ± 0%  +0.69% (p=0.002 n=6)
GangScheduling_MediumCluster-4           608.7k ± 0%   613.6k ± 0%  +0.81% (p=0.002 n=6)
geomean                                  116.7k        118.3k       +1.34%

pkg: github.com/NVIDIA/KAI-scheduler/pkg/scheduler/actions/integration_tests/reclaim
                            │ main-bench.txt │            pr-bench.txt             │
                            │     sec/op     │    sec/op      vs base              │
ReclaimLargeJobs_10Node-4      105.1m ± 0%     105.2m ± 0%         ~ (p=0.310 n=6)
ReclaimLargeJobs_50Node-4      144.8m ± 1%     144.6m ± 0%         ~ (p=0.818 n=6)
ReclaimLargeJobs_100Node-4     292.8m ± 1%     288.6m ± 1%    -1.43% (p=0.015 n=6)
ReclaimLargeJobs_200Node-4      1.158 ± 2%      1.166 ± 0%         ~ (p=0.240 n=6)
ReclaimLargeJobs_500Node-4      13.68 ± 2%      13.61 ± 2%         ~ (p=0.240 n=6)
ReclaimLargeJobs_1000Node-4     116.2 ±  ∞ ¹    118.3 ±  ∞ ¹  +1.79% (p=0.029 n=4)
geomean                         1.420           1.421         +0.07%
¹ need >= 6 samples for confidence interval at level 0.95

                            │ main-bench.txt │             pr-bench.txt             │
                            │      B/op      │      B/op       vs base              │
ReclaimLargeJobs_10Node-4     1.929Mi ± 3%     1.947Mi ± 3%    +0.95% (p=0.041 n=6)
ReclaimLargeJobs_50Node-4     18.01Mi ± 0%     18.18Mi ± 0%    +0.93% (p=0.002 n=6)
ReclaimLargeJobs_100Node-4    61.22Mi ± 0%     61.50Mi ± 0%    +0.46% (p=0.002 n=6)
ReclaimLargeJobs_200Node-4    241.4Mi ± 0%     240.7Mi ± 0%    -0.28% (p=0.002 n=6)
ReclaimLargeJobs_500Node-4    1.738Gi ± 0%     1.706Gi ± 0%    -1.85% (p=0.002 n=6)
ReclaimLargeJobs_1000Node-4   9.089Gi ±  ∞ ¹   8.779Gi ±  ∞ ¹  -3.41% (p=0.029 n=4)
geomean                       142.9Mi          142.1Mi         -0.55%
¹ need >= 6 samples for confidence interval at level 0.95

                            │ main-bench.txt │            pr-bench.txt             │
                            │   allocs/op    │   allocs/op    vs base              │
ReclaimLargeJobs_10Node-4      20.96k ± 3%     21.30k ± 3%    +1.65% (p=0.032 n=6)
ReclaimLargeJobs_50Node-4      240.5k ± 0%     244.2k ± 0%    +1.52% (p=0.002 n=6)
ReclaimLargeJobs_100Node-4     892.1k ± 0%     908.1k ± 0%    +1.79% (p=0.002 n=6)
ReclaimLargeJobs_200Node-4     3.751M ± 0%     3.843M ± 0%    +2.45% (p=0.002 n=6)
ReclaimLargeJobs_500Node-4     29.81M ± 0%     31.00M ± 0%    +3.99% (p=0.002 n=6)
ReclaimLargeJobs_1000Node-4    165.1M ±  ∞ ¹   174.0M ±  ∞ ¹  +5.44% (p=0.029 n=4)
geomean                        2.089M          2.147M         +2.80%
¹ need >= 6 samples for confidence interval at level 0.95

Legend

  • 📉 Negative delta = Performance improvement (faster)
  • 📈 Positive delta = Performance regression (slower)
  • p-value < 0.05 indicates statistically significant change
Raw benchmark data

PR branch:

goos: linux
goarch: amd64
pkg: github.com/NVIDIA/KAI-scheduler/pkg/scheduler/actions
cpu: AMD EPYC 7763 64-Core Processor                
BenchmarkAllocateAction_SmallCluster-4         	      10	 109731814 ns/op	 2489901 B/op	   38486 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 109604642 ns/op	 2493638 B/op	   38478 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108856636 ns/op	 2487935 B/op	   38477 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108541544 ns/op	 2487071 B/op	   38478 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108971900 ns/op	 2488412 B/op	   38479 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108790941 ns/op	 2500784 B/op	   38475 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 138403281 ns/op	13358264 B/op	  334313 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 139242843 ns/op	13352993 B/op	  334297 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 139009970 ns/op	13351621 B/op	  334303 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 137516173 ns/op	13351205 B/op	  334302 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 138839548 ns/op	13350369 B/op	  334298 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 139809428 ns/op	13350639 B/op	  334303 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 221185048 ns/op	45844451 B/op	 1416707 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 223817901 ns/op	45841796 B/op	 1416682 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       4	 258549450 ns/op	45844812 B/op	 1416712 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 220908801 ns/op	45842696 B/op	 1416689 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 213880077 ns/op	45861814 B/op	 1416675 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 221548442 ns/op	45844102 B/op	 1416699 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 102960225 ns/op	 1026610 B/op	    9392 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103015047 ns/op	 1027780 B/op	    9408 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103009152 ns/op	 1027344 B/op	    9419 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 102915305 ns/op	 1027572 B/op	    9419 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103071934 ns/op	 1031570 B/op	    9421 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103001440 ns/op	 1040348 B/op	    9423 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 107514054 ns/op	 3461171 B/op	   30673 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106268245 ns/op	 3468775 B/op	   30674 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106298442 ns/op	 3464842 B/op	   30673 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106543495 ns/op	 3468960 B/op	   30675 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106274300 ns/op	 3464935 B/op	   30673 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106262236 ns/op	 3464660 B/op	   30672 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103953674 ns/op	 1161677 B/op	   12133 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103873222 ns/op	 1161848 B/op	   12134 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103796748 ns/op	 1153943 B/op	   12130 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103760464 ns/op	 1157911 B/op	   12133 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103749262 ns/op	 1164442 B/op	   12132 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103792205 ns/op	 1161612 B/op	   12134 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 113593080 ns/op	 4647142 B/op	   42578 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 113442021 ns/op	 4651707 B/op	   42580 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114128117 ns/op	 4642978 B/op	   42577 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 113742935 ns/op	 4647222 B/op	   42578 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 113651900 ns/op	 4651716 B/op	   42581 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 113776278 ns/op	 4643078 B/op	   42578 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115307930 ns/op	 6192178 B/op	   77075 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115352306 ns/op	 6190995 B/op	   77074 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 114921345 ns/op	 6186940 B/op	   77052 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115243442 ns/op	 6205297 B/op	   77129 allocs/op

Main branch:

goos: linux
goarch: amd64
pkg: github.com/NVIDIA/KAI-scheduler/pkg/scheduler/actions
cpu: AMD EPYC 7763 64-Core Processor                
BenchmarkAllocateAction_SmallCluster-4         	      10	 109455615 ns/op	 2463460 B/op	   37856 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 109003191 ns/op	 2461518 B/op	   37848 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108952460 ns/op	 2462173 B/op	   37849 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108491697 ns/op	 2460213 B/op	   37848 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108723414 ns/op	 2462186 B/op	   37847 allocs/op
BenchmarkAllocateAction_SmallCluster-4         	      10	 108412725 ns/op	 2461493 B/op	   37847 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 138730435 ns/op	13246690 B/op	  331757 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 138274370 ns/op	13246855 B/op	  331748 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 139585592 ns/op	13244929 B/op	  331747 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 141195222 ns/op	13250393 B/op	  331752 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 141750957 ns/op	13246173 B/op	  331757 allocs/op
BenchmarkAllocateAction_MediumCluster-4        	       8	 139291619 ns/op	13246047 B/op	  331749 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 227390129 ns/op	45587688 B/op	 1410403 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 230952810 ns/op	45585492 B/op	 1410385 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 281502245 ns/op	45585294 B/op	 1410391 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 225463379 ns/op	45585155 B/op	 1410386 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 225708634 ns/op	45584256 B/op	 1410379 allocs/op
BenchmarkAllocateAction_LargeCluster-4         	       5	 220090881 ns/op	45592377 B/op	 1410379 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103140976 ns/op	 1011687 B/op	    9210 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 102968328 ns/op	 1017221 B/op	    9232 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 103012526 ns/op	 1020626 B/op	    9241 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 102975766 ns/op	 1020572 B/op	    9241 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 102961654 ns/op	 1016772 B/op	    9240 allocs/op
BenchmarkReclaimAction_SmallCluster-4          	      10	 102967765 ns/op	 1029458 B/op	    9242 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 107046104 ns/op	 3422635 B/op	   29925 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106350868 ns/op	 3418549 B/op	   29923 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106332620 ns/op	 3418644 B/op	   29923 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106311889 ns/op	 3418607 B/op	   29924 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 106623168 ns/op	 3422519 B/op	   29925 allocs/op
BenchmarkReclaimAction_MediumCluster-4         	      10	 105929780 ns/op	 3418504 B/op	   29923 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103917158 ns/op	 1145269 B/op	   11928 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103964809 ns/op	 1145285 B/op	   11928 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103765168 ns/op	 1141440 B/op	   11927 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103854395 ns/op	 1145135 B/op	   11928 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103713606 ns/op	 1147661 B/op	   11926 allocs/op
BenchmarkPreemptAction_SmallCluster-4          	      10	 103830253 ns/op	 1141320 B/op	   11927 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114187282 ns/op	 4578690 B/op	   41729 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114094431 ns/op	 4582607 B/op	   41729 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114292483 ns/op	 4578617 B/op	   41729 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114500761 ns/op	 4578648 B/op	   41729 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114042707 ns/op	 4578703 B/op	   41729 allocs/op
BenchmarkPreemptAction_MediumCluster-4         	       9	 114274918 ns/op	 4582957 B/op	   41731 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 114627018 ns/op	 6185798 B/op	   75932 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115151358 ns/op	 6188431 B/op	   75965 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115143308 ns/op	 6174607 B/op	   75846 allocs/op
BenchmarkConsolidationAction_SmallCluster-4    	       9	 115142001 ns/op	 6188932 B/op	   75875 allocs/op

enoodle force-pushed the erez/migrate-node-info-pod-info-to-vectors branch from db49cb7 to 4a7b05d on March 5, 2026 at 12:24

enoodle commented Mar 5, 2026

@coderabbitai review


coderabbitai bot commented Mar 5, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

coderabbitai bot left a comment

Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pkg/scheduler/actions/common/allocate.go (1)

177-190: ⚠️ Potential issue | 🟡 Minor

Fix log wording to match vector payload.

These logs now print task.ResReqVector (full resource vector), but the message still says “requires … GPUs”. That is misleading during debugging.

Suggested patch
-	log.InfraLogger.V(6).Infof("Binding Task <%v/%v> to node <%v>, requires: %v GPUs",
+	log.InfraLogger.V(6).Infof("Binding Task <%v/%v> to node <%v>, requires resources vector: %v",
 		task.Namespace, task.Name, node.Name, task.ResReqVector)

-	log.InfraLogger.V(6).Infof("Pipelining Task <%v/%v> to node <%v> requires: %v GPUs",
+	log.InfraLogger.V(6).Infof("Pipelining Task <%v/%v> to node <%v> requires resources vector: %v",
 		task.Namespace, task.Name, node.Name, task.ResReqVector)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/actions/common/allocate.go` around lines 177 - 190, Update the
log messages that currently print task.ResReqVector but say "requires: ... GPUs"
to accurately reflect the payload; change the wording in the binding log (the
log right before stmt.Allocate(...) inside the Allocate/Bind function) and in
pipelineTaskToNode(...) so they mention "requires resource vector" or "requires
resources" (or similar) instead of "requires: %v GPUs", keeping the same %v
placeholder for task.ResReqVector.
🧹 Nitpick comments (9)
pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go (1)

154-154: Use a single constant for the GPU vector key.

Both Line 154 and Line 177 hardcode "gpu". Centralizing this key reduces typo/drift risk while vector migration is still in progress.

Suggested refactor
 const (
 	idleGpuFilterName               = "AccumulatedIdleGpus"
 	nonAccumulatedScenarioBaseError = "accumulatedIdleGpus requires all the filters scenarios using the same instance to be based on the same scenario with accumulation of potential victims. "
 	requiredResourcesDiffError      = "The pending task %s didn't appear in the scenario given at the AccumulatedIdleGpus ctor"
 	recordedVictimsDiffError        = "The recorded victims should remain the same between the different scenario filtering. %d cache hits, pre update recorded tasks seen %d, post update recorded tasks seen %d"
 	potentialVictimsDiffError       = "The list of potential victims for the current scenario should contain the previous list of potential victims. Only %d out %d tasks are the contained in the current scenario"
+	gpuVectorKey                    = "gpu"
 )
@@
-		requiredResources = append(requiredResources, pod.ResReqVector.Get(pod.VectorMap.GetIndex("gpu")))
+		requiredResources = append(requiredResources, pod.ResReqVector.Get(pod.VectorMap.GetIndex(gpuVectorKey)))
@@
-	ig.nodesNameToIdleGpus[task.NodeName] += task.AcceptedResourceVector.Get(task.VectorMap.GetIndex("gpu"))
+	ig.nodesNameToIdleGpus[task.NodeName] += task.AcceptedResourceVector.Get(task.VectorMap.GetIndex(gpuVectorKey))

Also applies to: 177-177

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go`
at line 154, The code hardcodes "gpu" in multiple places (e.g., in the
requiredResources append using
pod.ResReqVector.Get(pod.VectorMap.GetIndex("gpu")) and the other occurrence at
line 177); introduce a single file-level constant (e.g., const gpuVectorKey =
"gpu") and replace all hardcoded "gpu" usages with that constant (use
pod.VectorMap.GetIndex(gpuVectorKey) wherever needed) so the key is centralized
and avoids drift during vector migration.
pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go (1)

28-28: Prefer per-test vector map instances over a shared package variable.

Line 28 introduces mutable shared test state. A per-test map keeps fixtures isolated and avoids hidden coupling as more vector keys are introduced in future tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go`
at line 28, Replace the package-level mutable testVectorMap with per-test
instances to avoid shared mutable state: remove the var testVectorMap =
resource_info.NewResourceVectorMap() and instantiate
resource_info.NewResourceVectorMap() inside each test (or a test setup helper)
that currently references testVectorMap (search for usages of testVectorMap in
idle_gpus_test.go) so each test gets its own fresh ResourceVectorMap fixture.
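A minimal sketch of the per-test pattern, using a toy map type and an illustrative `newTestVectorMap` helper in place of the real `resource_info.ResourceVectorMap` (both names here are stand-ins, not existing project APIs):

```go
package main

// vectorMap is a toy stand-in for resource_info.ResourceVectorMap:
// it maps resource names to vector indices.
type vectorMap map[string]int

// newTestVectorMap builds a fresh map per call, so each test owns its
// fixture instead of sharing a mutable package-level variable.
func newTestVectorMap(resources ...string) vectorMap {
	m := vectorMap{}
	for i, r := range resources {
		m[r] = i
	}
	return m
}
```

Each test then calls the helper locally (e.g. `newTestVectorMap("gpu")`), so registering an extra key in one test cannot shift indices seen by another.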
pkg/scheduler/plugins/proportion/proportion_test.go (1)

623-627: Reduce duplicated resource fixture setup for ResReq/ResReqVector.

The same requirement is parsed twice per pod (once for ResReq, once for ResReqVector). Create it once and derive both fields from it to avoid fixture drift.

♻️ Example pattern
-ResReq:       common_info.BuildResourceRequirements("1", "1G"),
-ResReqVector: common_info.BuildResourceRequirements("1", "1G").ToVector(testVectorMap),
+req := common_info.BuildResourceRequirements("1", "1G")
+ResReq:       req,
+ResReqVector: req.ToVector(testVectorMap),

Also applies to: 634-638, 661-665, 672-676, 686-690, 713-717, 724-728, 738-742, 843-847, 875-879, 881-885, 887-891, 921-925, 927-931, 958-962, 964-968

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/proportion_test.go` around lines 623 - 627,
The test fixtures duplicate parsing by calling
common_info.BuildResourceRequirements twice for ResReq and ResReqVector; fix by
creating a single requirement variable (e.g., req :=
common_info.BuildResourceRequirements("2", "2G")) and reuse it for both fields
so ResReq: req and ResReqVector: req.ToVector(testVectorMap); apply the same
refactor to all similar blocks that build resources (lines referenced in the
comment) to avoid fixture drift and keep testVectorMap usage consistent.
pkg/scheduler/plugins/proportion/utils/utils.go (1)

24-24: Add a GoDoc comment for exported QuantifyVector.

Please add a short GoDoc comment starting with QuantifyVector ....

As per coding guidelines "Add GoDoc-style comments for exported functions and types."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/utils/utils.go` at line 24, QuantifyVector
is missing a GoDoc comment; add a GoDoc-style comment directly above the
exported function QuantifyVector that begins with "QuantifyVector ..." and
briefly describes what the function does, its parameters (vec and vectorMap) and
the returned rs.ResourceQuantities to satisfy the project's documentation
guidelines.
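One possible shape for that comment, shown on a stubbed signature — the `resourceVector`, `resourceVectorMap`, and `resourceQuantities` types below are illustrative stand-ins for the real `resource_info` and `rs` types, only there to make the doc-comment example compile:

```go
package main

// Toy stand-ins for the real ResourceVector / ResourceVectorMap /
// rs.ResourceQuantities types.
type resourceVector []float64
type resourceVectorMap map[string]int
type resourceQuantities map[string]float64

// QuantifyVector converts vec into named resource quantities, using
// vectorMap to resolve each resource name to its index in vec.
// Resources whose index is missing or out of range are reported as zero.
func QuantifyVector(vec resourceVector, vectorMap resourceVectorMap) resourceQuantities {
	out := resourceQuantities{}
	for name, idx := range vectorMap {
		if idx >= 0 && idx < len(vec) {
			out[name] = vec[idx]
		} else {
			out[name] = 0
		}
	}
	return out
}
```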
pkg/scheduler/plugins/proportion/proportion.go (1)

221-239: Refactor getResources to single-pass accumulation and nil-skip.

The current implementation allocates an intermediate slice and then re-iterates it. A single pass is simpler, and safer when any AcceptedResourceVector is nil.

Suggested patch
 func getResources(ignoreReallocatedTasks bool, pods ...*pod_info.PodInfo) resource_info.ResourceVector {
-	var vectors []resource_info.ResourceVector
+	var total resource_info.ResourceVector
 	for _, task := range pods {
 		if ignoreReallocatedTasks && pod_status.IsActiveAllocatedStatus(task.Status) {
 			continue
 		}
-		vectors = append(vectors, task.AcceptedResourceVector)
-	}
-
-	if len(vectors) == 0 {
-		return nil
-	}
-
-	total := vectors[0].Clone()
-	for _, vec := range vectors[1:] {
+		vec := task.AcceptedResourceVector
+		if vec == nil {
+			continue
+		}
+		if total == nil {
+			total = vec.Clone()
+			continue
+		}
 		total.Add(vec)
 	}
 
 	return total
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/proportion.go` around lines 221 - 239, The
getResources function currently builds an intermediate slice then sums it;
instead, iterate once over pods, skip tasks where ignoreReallocatedTasks &&
pod_status.IsActiveAllocatedStatus(task.Status) and also skip when
task.AcceptedResourceVector is nil, and accumulate directly into a single
resource_info.ResourceVector named total by setting total =
task.AcceptedResourceVector.Clone() for the first non-nil vector and
total.Add(vec) for subsequent ones; if no non-nil vectors are seen return nil.
Use the existing function name getResources, types resource_info.ResourceVector
and pod_info.PodInfo, and the AcceptedResourceVector field/Clone/Add methods
referenced in the diff.
pkg/scheduler/cache/cluster_info/cluster_info.go (1)

285-285: Replace the hardcoded GPU key with the shared constant.

Using a string literal here is brittle when a canonical constant already exists in this file.

♻️ Proposed cleanup
-			if nodeInfo.AllocatableVector.Get(nodeInfo.VectorMap.GetIndex("gpu")) > 0 {
+			if nodeInfo.AllocatableVector.Get(nodeInfo.VectorMap.GetIndex(constants.GpuResource)) > 0 {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/cache/cluster_info/cluster_info.go` at line 285, The code uses
a hardcoded string literal "gpu" in nodeInfo.VectorMap.GetIndex("gpu") when
checking nodeInfo.AllocatableVector.Get(...); replace that literal with the
package-level/shared GPU constant (use the existing canonical constant in this
file, e.g., GPUResourceKey) so the check reads
nodeInfo.VectorMap.GetIndex(GPUResourceKey) (keep using nodeInfo,
AllocatableVector.Get and VectorMap.GetIndex as-is).
pkg/scheduler/plugins/topology/job_filtering.go (1)

113-124: Extract duplicated task-vector aggregation into one helper.

The same accumulation logic exists in getTasksAllocationMetadata and sortTreeFromRoot; consolidating it will reduce drift risk.

Also applies to: 431-440

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/topology/job_filtering.go` around lines 113 - 124, The
aggregation logic that builds a combined ResourceVector and task count is
duplicated in getTasksAllocationMetadata and in sortTreeFromRoot; extract it
into a single helper (e.g., aggregateTasksResources or
computeTasksResourceSummary) that accepts []*pod_info.PodInfo and returns
(resource_info.ResourceVector, int), move the loop that clones the first
task.ResReqVector and adds subsequent vectors into that helper, and replace the
bodies of getTasksAllocationMetadata and the code in sortTreeFromRoot (and the
duplicate at lines ~431-440) to call the new helper to obtain tasksResources and
tasksCount.
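A sketch of what such a helper could look like, with toy stand-ins for `pod_info.PodInfo` and `resource_info.ResourceVector` (the name `aggregateTasksResources` is the suggestion above, not an existing API):

```go
package main

type vector []float64

func (v vector) clone() vector {
	out := make(vector, len(v))
	copy(out, v)
	return out
}

func (v vector) add(o vector) {
	for i := range v {
		if i < len(o) {
			v[i] += o[i]
		}
	}
}

type podInfo struct {
	resReqVector vector
}

// aggregateTasksResources sums the request vectors of all tasks and reports
// how many tasks contributed, so both call sites share one implementation.
func aggregateTasksResources(tasks []*podInfo) (vector, int) {
	var total vector
	count := 0
	for _, t := range tasks {
		if t.resReqVector == nil {
			continue
		}
		count++
		if total == nil {
			total = t.resReqVector.clone()
			continue
		}
		total.add(t.resReqVector)
	}
	return total, count
}
```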
pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go (1)

18-24: Add a GoDoc comment for exported FitsReclaimStrategy.

The exported function signature was updated, but it still has no GoDoc comment.

As per coding guidelines "Add GoDoc-style comments for exported functions and types".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go` around
lines 18 - 24, Add a GoDoc comment for the exported function FitsReclaimStrategy
that briefly describes what the function does and explains its parameters and
return value: mention that it determines whether a reclaimer
(reclaimerResources) can reclaim resources from a reclaimee given the resource
vector map (vectorMap), queue attributes for reclaimer and reclaimee
(reclaimerQueue, reclaimeeQueue) and the reclaimee's remaining share
(reclaimeeRemainingShare), and that it returns a bool indicating whether the
reclaim fits; place the comment immediately above the FitsReclaimStrategy
declaration and follow GoDoc conventions (starts with the function name and is a
complete sentence).
pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go (1)

6-20: Reorder imports into standard / external / internal groups.

k8s.io/api/core/v1 is an external dependency and should be separated from internal imports.

♻️ Suggested import grouping
 import (
 	"fmt"
 	"maps"
 	"math"
 	"strings"
 
+	v1 "k8s.io/api/core/v1"
+
 	commonconstants "github.com/NVIDIA/KAI-scheduler/pkg/common/constants"
 	"github.com/NVIDIA/KAI-scheduler/pkg/scheduler/api/common_info"
 	"github.com/NVIDIA/KAI-scheduler/pkg/scheduler/api/resource_info"
 	"github.com/NVIDIA/KAI-scheduler/pkg/scheduler/log"
 	"github.com/NVIDIA/KAI-scheduler/pkg/scheduler/plugins/proportion/reclaimable/strategies"
 	rs "github.com/NVIDIA/KAI-scheduler/pkg/scheduler/plugins/proportion/resource_share"
 	"github.com/NVIDIA/KAI-scheduler/pkg/scheduler/plugins/proportion/utils"
-	v1 "k8s.io/api/core/v1"
 )

As per coding guidelines "Organize imports in three groups separated by blank lines: (1) Standard library, (2) External dependencies, (3) Internal packages".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go` around lines 6 -
20, Reorder the import block in reclaimable.go into three groups separated by
blank lines: (1) standard library imports (fmt, maps, math, strings), (2)
external dependencies (v1 "k8s.io/api/core/v1"), and (3) internal packages
(commonconstants, common_info, resource_info, log, strategies, rs, utils) while
preserving existing aliases (e.g., v1 and rs) and import order within each group
to satisfy the project's import grouping guideline.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/scheduler/api/common_info/pod_errors.go`:
- Around line 80-83: The code reads vectors using indices from
vectorMap.GetIndex without checking for -1; update the logic around the uses of
vectorMap.GetIndex for migProfile (and the analogous scalar/resource access
later) to first capture the index (e.g., migIdx :=
vectorMap.GetIndex(string(migProfile))) and guard with if migIdx >= 0 before
calling availableVector.Get(migIdx) or capacityVector.Get(migIdx); if the index
is < 0, return or surface an explicit error/log indicating the resource/MIG
profile is unregistered (so you don't silently treat it as zero), mirroring the
existing pattern used elsewhere in resource_vector.go.

In `@pkg/scheduler/api/node_info/node_info.go`:
- Around line 394-403: The helper lessEqualVectorsExcludingGPU mutates its input
ResourceVector values (a and b) which can race; change it to a non-mutating
comparison: get gpuIdx via ni.VectorMap.GetIndex(commonconstants.GpuResource)
and then loop over all valid indices (or use a length accessor) comparing
a.Get(i) <= b.Get(i) for each i except gpuIdx, returning false on the first
violation and true otherwise. Ensure you reference and use only
ResourceVector.Get and the VectorMap/GetIndex (no Set calls) so neither
task.ResReqVector nor node vectors are modified during the check.

In `@pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy.go`:
- Around line 79-83: getRequiredQuota iterates tasksToAllocate and blindly calls
utils.QuantifyVector(pod.ResReqVector, pod.VectorMap) which will nil-deref for
pods that haven't initialized vector fields; add a defensive check in
getRequiredQuota: if pod.ResReqVector==nil || pod.VectorMap==nil then compute
quantities from the pod's scalar resource fields (e.g., CPU/milliCPU, Memory,
GPU scalar fields the pod struct exposes) and add those to quota.GPU,
quota.MilliCPU and quota.Memory, otherwise call utils.QuantifyVector as before;
update variable names (quantities, quota.GPU/MilliCPU/Memory) accordingly so
behavior is identical for vectorized pods.

In `@pkg/scheduler/plugins/proportion/proportion_test.go`:
- Around line 811-817: The node's VectorMap and AllocatableVector are being
built from a newly created vectorMap, which can diverge from the test fixtures'
testVectorMap and break vector index alignment; change the setup so
testData.node.VectorMap is assigned the existing testVectorMap (use
testVectorMap instead of a locally created vectorMap) and compute
testData.node.AllocatableVector via
testData.node.Allocatable.ToVector(testVectorMap) so getNodeResources and pod
vectors share the same ResourceVectorMap instance.

In `@pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go`:
- Around line 269-289: getInvolvedResourcesNames currently only checks the
GpuResource slot; add a check for the MIG-backed GPU slot so MIG-only vectors
also mark GPU as involved: retrieve the MIG index from vectorMap (similar to
cpuIdx/memIdx/gpuIdx, e.g. migIdx :=
vectorMap.GetIndex(commonconstants.MigResource) or the project's MIG constant),
ensure the index is valid, and if vec.Get(migIdx) > 0 set
involvedResources[rs.GpuResource] = struct{}{} (keep the existing checks for
cpuIdx/memIdx/gpuIdx intact in the getInvolvedResourcesNames function).

In `@pkg/scheduler/plugins/topology/job_filtering_test.go`:
- Around line 31-32: Tests currently mix a global testVectorMap (created via
resource_info.NewResourceVectorMap()) with per-tree VectorMap instances causing
inconsistent vector/indices; replace the global testVectorMap by creating a
single VectorMap inside each test and pass that same VectorMap instance to every
topology/tree creation in that test (instead of creating new maps inside
functions), update usages referencing testVectorMap so all calls use the local
VectorMap, and remove any other per-tree NewResourceVectorMap() calls so each
test runs end-to-end with one consistent VectorMap.

In `@pkg/scheduler/plugins/topology/topology_plugin.go`:
- Around line 59-64: If nodes can be empty, ensure sharedVectorMap is
initialized to a non-nil default instead of staying nil: after the loop that
sets sharedVectorMap from nodeInfo.VectorMap, check if sharedVectorMap == nil
and, if so, create and assign a new resource_info.ResourceVectorMap (or call the
existing constructor/helper) before it is used or assigned into topology trees
(references: sharedVectorMap, nodes, resource_info.ResourceVectorMap); this
prevents nil derefs when later converting/indexing vectors.

---

Outside diff comments:
In `@pkg/scheduler/actions/common/allocate.go`:
- Around line 177-190: Update the log messages that currently print
task.ResReqVector but say "requires: ... GPUs" to accurately reflect the
payload; change the wording in the binding log (the log right before
stmt.Allocate(...) inside the Allocate/Bind function) and in
pipelineTaskToNode(...) so they mention "requires resource vector" or "requires
resources" (or similar) instead of "requires: %v GPUs", keeping the same %v
placeholder for task.ResReqVector.

---

Nitpick comments:
In
`@pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go`:
- Line 28: Replace the package-level mutable testVectorMap with per-test
instances to avoid shared mutable state: remove the var testVectorMap =
resource_info.NewResourceVectorMap() and instantiate
resource_info.NewResourceVectorMap() inside each test (or a test setup helper)
that currently references testVectorMap (search for usages of testVectorMap in
idle_gpus_test.go) so each test gets its own fresh ResourceVectorMap fixture.

In
`@pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go`:
- Line 154: The code hardcodes "gpu" in multiple places (e.g., in the
requiredResources append using
pod.ResReqVector.Get(pod.VectorMap.GetIndex("gpu")) and the other occurrence at
line 177); introduce a single file-level constant (e.g., const gpuVectorKey =
"gpu") and replace all hardcoded "gpu" usages with that constant (use
pod.VectorMap.GetIndex(gpuVectorKey) wherever needed) so the key is centralized
and avoids drift during vector migration.

In `@pkg/scheduler/cache/cluster_info/cluster_info.go`:
- Line 285: The code uses a hardcoded string literal "gpu" in
nodeInfo.VectorMap.GetIndex("gpu") when checking
nodeInfo.AllocatableVector.Get(...); replace that literal with the
package-level/shared GPU constant (use the existing canonical constant in this
file, e.g., GPUResourceKey) so the check reads
nodeInfo.VectorMap.GetIndex(GPUResourceKey) (keep using nodeInfo,
AllocatableVector.Get and VectorMap.GetIndex as-is).

In `@pkg/scheduler/plugins/proportion/proportion_test.go`:
- Around line 623-627: The test fixtures duplicate parsing by calling
common_info.BuildResourceRequirements twice for ResReq and ResReqVector; fix by
creating a single requirement variable (e.g., req :=
common_info.BuildResourceRequirements("2", "2G")) and reuse it for both fields
so ResReq: req and ResReqVector: req.ToVector(testVectorMap); apply the same
refactor to all similar blocks that build resources (lines referenced in the
comment) to avoid fixture drift and keep testVectorMap usage consistent.

In `@pkg/scheduler/plugins/proportion/proportion.go`:
- Around line 221-239: The getResources function currently builds an
intermediate slice then sums it; instead, iterate once over pods, skip tasks
where ignoreReallocatedTasks && pod_status.IsActiveAllocatedStatus(task.Status)
and also skip when task.AcceptedResourceVector is nil, and accumulate directly
into a single resource_info.ResourceVector named total by setting total =
task.AcceptedResourceVector.Clone() for the first non-nil vector and
total.Add(vec) for subsequent ones; if no non-nil vectors are seen return nil.
Use the existing function name getResources, types resource_info.ResourceVector
and pod_info.PodInfo, and the AcceptedResourceVector field/Clone/Add methods
referenced in the diff.

In `@pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go`:
- Around line 6-20: Reorder the import block in reclaimable.go into three groups
separated by blank lines: (1) standard library imports (fmt, maps, math,
strings), (2) external dependencies (v1 "k8s.io/api/core/v1"), and (3) internal
packages (commonconstants, common_info, resource_info, log, strategies, rs,
utils) while preserving existing aliases (e.g., v1 and rs) and import order
within each group to satisfy the project's import grouping guideline.

In `@pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go`:
- Around line 18-24: Add a GoDoc comment for the exported function
FitsReclaimStrategy that briefly describes what the function does and explains
its parameters and return value: mention that it determines whether a reclaimer
(reclaimerResources) can reclaim resources from a reclaimee given the resource
vector map (vectorMap), queue attributes for reclaimer and reclaimee
(reclaimerQueue, reclaimeeQueue) and the reclaimee's remaining share
(reclaimeeRemainingShare), and that it returns a bool indicating whether the
reclaim fits; place the comment immediately above the FitsReclaimStrategy
declaration and follow GoDoc conventions (starts with the function name and is a
complete sentence).

In `@pkg/scheduler/plugins/proportion/utils/utils.go`:
- Line 24: QuantifyVector is missing a GoDoc comment; add a GoDoc-style comment
directly above the exported function QuantifyVector that begins with
"QuantifyVector ..." and briefly describes what the function does, its
parameters (vec and vectorMap) and the returned rs.ResourceQuantities to satisfy
the project's documentation guidelines.

In `@pkg/scheduler/plugins/topology/job_filtering.go`:
- Around line 113-124: The aggregation logic that builds a combined
ResourceVector and task count is duplicated in getTasksAllocationMetadata and in
sortTreeFromRoot; extract it into a single helper (e.g., aggregateTasksResources
or computeTasksResourceSummary) that accepts []*pod_info.PodInfo and returns
(resource_info.ResourceVector, int), move the loop that clones the first
task.ResReqVector and adds subsequent vectors into that helper, and replace the
bodies of getTasksAllocationMetadata and the code in sortTreeFromRoot (and the
duplicate at lines ~431-440) to call the new helper to obtain tasksResources and
tasksCount.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3560b2aa-a32c-4a45-a10b-4d963473d8f0

📥 Commits

Reviewing files that changed from the base of the PR and between 49dd391 and 4a7b05d.

📒 Files selected for processing (38)
  • pkg/scheduler/actions/common/allocate.go
  • pkg/scheduler/actions/common/feasible_nodes_test.go
  • pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus.go
  • pkg/scheduler/actions/common/solvers/accumulated_scenario_filters/idle_gpus/idle_gpus_test.go
  • pkg/scheduler/actions/common/solvers/pod_scenario_builder_test.go
  • pkg/scheduler/actions/utils/job_order_by_queue_test.go
  • pkg/scheduler/api/common_info/job_errors.go
  • pkg/scheduler/api/common_info/job_errors_test.go
  • pkg/scheduler/api/common_info/pod_errors.go
  • pkg/scheduler/api/common_info/pod_errors_test.go
  • pkg/scheduler/api/node_info/gpu_sharing_node_info.go
  • pkg/scheduler/api/node_info/node_info.go
  • pkg/scheduler/api/node_info/node_info_test.go
  • pkg/scheduler/api/pod_info/pod_info.go
  • pkg/scheduler/cache/cluster_info/cluster_info.go
  • pkg/scheduler/framework/session.go
  • pkg/scheduler/framework/statement.go
  • pkg/scheduler/plugins/nodeavailability/nodeavailability.go
  • pkg/scheduler/plugins/nodeplacement/nodepack_test.go
  • pkg/scheduler/plugins/nodeplacement/nodespread_test.go
  • pkg/scheduler/plugins/nodeplacement/pack.go
  • pkg/scheduler/plugins/nodeplacement/spread.go
  • pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy.go
  • pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy_test.go
  • pkg/scheduler/plugins/proportion/proportion.go
  • pkg/scheduler/plugins/proportion/proportion_test.go
  • pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go
  • pkg/scheduler/plugins/proportion/reclaimable/reclaimable_test.go
  • pkg/scheduler/plugins/proportion/reclaimable/reclaimer_info.go
  • pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies.go
  • pkg/scheduler/plugins/proportion/reclaimable/strategies/strategies_test.go
  • pkg/scheduler/plugins/proportion/utils/utils.go
  • pkg/scheduler/plugins/resourcetype/resourcetype.go
  • pkg/scheduler/plugins/topology/job_filtering.go
  • pkg/scheduler/plugins/topology/job_filtering_test.go
  • pkg/scheduler/plugins/topology/node_scoring_test.go
  • pkg/scheduler/plugins/topology/topology_plugin.go
  • pkg/scheduler/plugins/topology/topology_structs.go

Comment on lines +80 to 83
migIdx := vectorMap.GetIndex(string(migProfile))
availableMigProfilesQuant := int64(availableVector.Get(migIdx))
capacityMigProfilesQuant := int64(capacityVector.Get(migIdx))
if availableMigProfilesQuant < quant {
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Finding ResourceVectorMap/Get implementation ==="
fd resource_vector.go -t f 2>/dev/null || echo "File not found, searching in pkg/scheduler"
find . -name "resource_vector.go" -type f 2>/dev/null | head -5

echo
echo "=== Finding pod_errors.go ==="
find . -name "pod_errors.go" -type f 2>/dev/null | head -5

Repository: NVIDIA/KAI-Scheduler

Length of output: 297


🏁 Script executed:

#!/bin/bash
# Let's search for the resource_vector.go file and examine GetIndex and Get implementations
rg -l "ResourceVectorMap|ResourceVector" --type go | head -10

Repository: NVIDIA/KAI-Scheduler

Length of output: 590


🏁 Script executed:

#!/bin/bash
# Search for GetIndex and Get method implementations
rg -n "func.*GetIndex\(" --type go -A 5
rg -n "func.*\.Get\(" --type go -A 5 | head -50

Repository: NVIDIA/KAI-Scheduler

Length of output: 1284


🏁 Script executed:

#!/bin/bash
# Read the actual pod_errors.go file to see lines 80-83 and 137-145
cat pkg/scheduler/api/common_info/pod_errors.go | head -150

Repository: NVIDIA/KAI-Scheduler

Length of output: 5664


🏁 Script executed:

#!/bin/bash
# Let's find the ResourceVector.Get implementation
rg -n "func.*ResourceVector.*Get" --type go -A 10 | grep -A 10 "func (.*ResourceVector) Get"

Repository: NVIDIA/KAI-Scheduler

Length of output: 899


🏁 Script executed:

#!/bin/bash
# Check what Get returns for invalid index
rg -n "Get\(" pkg/scheduler/api/resource_info/resource_vector.go -B 2 -A 10 | head -60

Repository: NVIDIA/KAI-Scheduler

Length of output: 2240


🏁 Script executed:

#!/bin/bash
# Examine the ResourceVector type and Get method more carefully
sed -n '1,300p' pkg/scheduler/api/resource_info/resource_vector.go | tail -150

Repository: NVIDIA/KAI-Scheduler

Length of output: 4369


🏁 Script executed:

#!/bin/bash
# Check how GetIndex followed by Get is used elsewhere in the codebase
rg -n "GetIndex.*\(" --type go -A 3 | grep -A 3 "\.Get(" | head -100

Repository: NVIDIA/KAI-Scheduler

Length of output: 7695


🏁 Script executed:

#!/bin/bash
# Check for explicit index guard patterns before Get calls
rg -n "if.*GetIndex.*>=" --type go -B 2 -A 2

Repository: NVIDIA/KAI-Scheduler

Length of output: 1298


Add guards before reading missing resource indices from vectors.

GetIndex returns -1 for unregistered resources, and while Get(-1) safely returns 0 (no panic risk), this masks resource accounting errors. When a MIG profile or scalar resource is requested but absent from the vectorMap, the code silently treats it as 0 available, which can produce incorrect error messages.

This pattern is inconsistent with other vector access in resource_vector.go (lines 211–212, 233–234, 238–240), where explicit if idx >= 0 guards precede vector operations. For consistency and to catch misconfigurations early, add similar guards at lines 80 and 137.

Suggested hardening pattern
-			migIdx := vectorMap.GetIndex(string(migProfile))
-			availableMigProfilesQuant := int64(availableVector.Get(migIdx))
-			capacityMigProfilesQuant := int64(capacityVector.Get(migIdx))
+			migIdx := vectorMap.GetIndex(string(migProfile))
+			if migIdx >= 0 {
+				availableMigProfilesQuant := int64(availableVector.Get(migIdx))
+				capacityMigProfilesQuant := int64(capacityVector.Get(migIdx))
 			if availableMigProfilesQuant < quant {
-		scalarIdx := vectorMap.GetIndex(string(requestedResourceName))
-		availableResourceQuant := int64(availableVector.Get(scalarIdx))
-		capacityResourceQuant := int64(capacityVector.Get(scalarIdx))
+		scalarIdx := vectorMap.GetIndex(string(requestedResourceName))
+		if scalarIdx >= 0 {
+			availableResourceQuant := int64(availableVector.Get(scalarIdx))
+			capacityResourceQuant := int64(capacityVector.Get(scalarIdx))
 		if availableResourceQuant < requestedResourceQuant {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/api/common_info/pod_errors.go` around lines 80 - 83, The code
reads vectors using indices from vectorMap.GetIndex without checking for -1;
update the logic around the uses of vectorMap.GetIndex for migProfile (and the
analogous scalar/resource access later) to first capture the index (e.g., migIdx
:= vectorMap.GetIndex(string(migProfile))) and guard with if migIdx >= 0 before
calling availableVector.Get(migIdx) or capacityVector.Get(migIdx); if the index
is < 0, return or surface an explicit error/log indicating the resource/MIG
profile is unregistered (so you don't silently treat it as zero), mirroring the
existing pattern used elsewhere in resource_vector.go.
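The guard boils down to checking the index before reading. A small standalone sketch with a hypothetical `getQuantity` helper (plain `[]float64` standing in for ResourceVector):

```go
package main

// getQuantity reads vec[idx] only when idx is a registered, in-range index
// (GetIndex-style lookups return -1 for unknown resources). The bool lets
// callers distinguish "really zero" from "resource not registered".
func getQuantity(vec []float64, idx int) (float64, bool) {
	if idx < 0 || idx >= len(vec) {
		return 0, false
	}
	return vec[idx], true
}
```

Callers that get `false` can log or surface an explicit error instead of silently treating the resource as zero available.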

Comment on lines +394 to +403
func (ni *NodeInfo) lessEqualVectorsExcludingGPU(a, b resource_info.ResourceVector) bool {
gpuIdx := ni.VectorMap.GetIndex(commonconstants.GpuResource)
savedA := a.Get(gpuIdx)
savedB := b.Get(gpuIdx)
a.Set(gpuIdx, 0)
b.Set(gpuIdx, 0)
result := a.LessEqual(b)
a.Set(gpuIdx, savedA)
b.Set(gpuIdx, savedB)
return result
⚠️ Potential issue | 🔴 Critical

Avoid mutating shared vectors in resource comparisons.

This helper temporarily writes into both inputs (a and b). That mutates task.ResReqVector and node vectors during feasibility checks and can race under concurrent node evaluation.

Proposed non-mutating fix
 func (ni *NodeInfo) lessEqualVectorsExcludingGPU(a, b resource_info.ResourceVector) bool {
 	gpuIdx := ni.VectorMap.GetIndex(commonconstants.GpuResource)
-	savedA := a.Get(gpuIdx)
-	savedB := b.Get(gpuIdx)
-	a.Set(gpuIdx, 0)
-	b.Set(gpuIdx, 0)
-	result := a.LessEqual(b)
-	a.Set(gpuIdx, savedA)
-	b.Set(gpuIdx, savedB)
-	return result
+	if len(a) != len(b) {
+		return false
+	}
+	for i := 0; i < len(a); i++ {
+		if i == gpuIdx {
+			continue
+		}
+		if a.Get(i) > b.Get(i) {
+			return false
+		}
+	}
+	return true
 }
Based on learnings: In docs/developer/designs/vectorizing-resources/README.md, “simultaneously” refers to goroutine-based concurrent execution where different goroutines process different nodes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/api/node_info/node_info.go` around lines 394 - 403, The helper
lessEqualVectorsExcludingGPU mutates its input ResourceVector values (a and b)
which can race; change it to a non-mutating comparison: get gpuIdx via
ni.VectorMap.GetIndex(commonconstants.GpuResource) and then loop over all valid
indices (or use a length accessor) comparing a.Get(i) <= b.Get(i) for each i
except gpuIdx, returning false on the first violation and true otherwise. Ensure
you reference and use only ResourceVector.Get and the VectorMap/GetIndex (no Set
calls) so neither task.ResReqVector nor node vectors are modified during the
check.
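The non-mutating variant amounts to an element-wise comparison that skips one index. A toy sketch, with plain `float64` slices standing in for ResourceVector:

```go
package main

// lessEqualExcluding reports whether a[i] <= b[i] for every index except
// skip, without writing to either slice, so it is safe to call while other
// goroutines read the same vectors during concurrent node evaluation.
func lessEqualExcluding(a, b []float64, skip int) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if i == skip {
			continue
		}
		if a[i] > b[i] {
			return false
		}
	}
	return true
}
```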

Comment on lines 79 to +83
for _, pod := range tasksToAllocate {
quota.GPU += pod.ResReq.GetGpusQuota()
quota.MilliCPU += pod.ResReq.Cpu()
quota.Memory += pod.ResReq.Memory()
quantities := utils.QuantifyVector(pod.ResReqVector, pod.VectorMap)
quota.GPU += quantities[rs.GpuResource]
quota.MilliCPU += quantities[rs.CpuResource]
quota.Memory += quantities[rs.MemoryResource]
⚠️ Potential issue | 🟠 Major

Add fallback for pods without vector fields initialized.

getRequiredQuota now assumes ResReqVector and VectorMap are always present. Add a scalar fallback to avoid nil dereference / bad quota in transitional paths.

🛠️ Suggested defensive fallback
 func getRequiredQuota(tasksToAllocate []*pod_info.PodInfo) *podgroup_info.JobRequirement {
 	quota := podgroup_info.JobRequirement{}
 	for _, pod := range tasksToAllocate {
-		quantities := utils.QuantifyVector(pod.ResReqVector, pod.VectorMap)
+		var quantities rs.ResourceQuantities
+		if pod.VectorMap == nil || pod.ResReqVector == nil {
+			if pod.ResReq == nil {
+				continue
+			}
+			quantities = utils.QuantifyResourceRequirements(pod.ResReq)
+		} else {
+			quantities = utils.QuantifyVector(pod.ResReqVector, pod.VectorMap)
+		}
 		quota.GPU += quantities[rs.GpuResource]
 		quota.MilliCPU += quantities[rs.CpuResource]
 		quota.Memory += quantities[rs.MemoryResource]
 	}
 	return &quota
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/capacity_policy/capacity_policy.go` around
lines 79 - 83, getRequiredQuota iterates tasksToAllocate and blindly calls
utils.QuantifyVector(pod.ResReqVector, pod.VectorMap) which will nil-deref for
pods that haven't initialized vector fields; add a defensive check in
getRequiredQuota: if pod.ResReqVector==nil || pod.VectorMap==nil then compute
quantities from the pod's scalar resource fields (e.g., CPU/milliCPU, Memory,
GPU scalar fields the pod struct exposes) and add those to quota.GPU,
quota.MilliCPU and quota.Memory, otherwise call utils.QuantifyVector as before;
update variable names (quantities, quota.GPU/MilliCPU/Memory) accordingly so
behavior is identical for vectorized pods.

Comment on lines +811 to 817
+vectorMap := resource_info.NewResourceVectorMap()
+for rName := range testData.node.Allocatable.ScalarResources() {
+	vectorMap.AddResource(string(rName))
+}
+testData.node.VectorMap = vectorMap
+testData.node.AllocatableVector = testData.node.Allocatable.ToVector(vectorMap)
 if got := getNodeResources(session, testData.node); !reflect.DeepEqual(got, testData.want) {

⚠️ Potential issue | 🟡 Minor

Ensure node and pod vectors use the same ResourceVectorMap instance.

On lines 811-817, testData.node.VectorMap is built from a locally created map, while several pod fixtures in the same tests use the shared testVectorMap. If either map gains additional resources, the indices silently misalign, making vector-math assertions brittle.

💡 Suggested fix
 vectorMap := resource_info.NewResourceVectorMap()
 for rName := range testData.node.Allocatable.ScalarResources() {
 	vectorMap.AddResource(string(rName))
 }
 testData.node.VectorMap = vectorMap
 testData.node.AllocatableVector = testData.node.Allocatable.ToVector(vectorMap)
+for _, p := range testData.node.PodInfos {
+	if p != nil && p.ResReq != nil {
+		p.VectorMap = vectorMap
+		p.ResReqVector = p.ResReq.ToVector(vectorMap)
+	}
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/proportion_test.go` around lines 811 - 817,
The node's VectorMap and AllocatableVector are being built from a newly created
vectorMap, which can diverge from the test fixtures' testVectorMap and break
vector index alignment; change the setup so testData.node.VectorMap is assigned
the existing testVectorMap (use testVectorMap instead of a locally created
vectorMap) and compute testData.node.AllocatableVector via
testData.node.Allocatable.ToVector(testVectorMap) so getNodeResources and pod
vectors share the same ResourceVectorMap instance.

Comment on lines +269 to 289
+func getInvolvedResourcesNames(resources []resource_info.ResourceVector, vectorMap *resource_info.ResourceVectorMap) map[rs.ResourceName]any {
 	involvedResources := map[rs.ResourceName]any{}
-	for _, resource := range resources {
-		if resource == nil {
+	cpuIdx := vectorMap.GetIndex(string(v1.ResourceCPU))
+	memIdx := vectorMap.GetIndex(string(v1.ResourceMemory))
+	gpuIdx := vectorMap.GetIndex(commonconstants.GpuResource)
+
+	for _, vec := range resources {
+		if vec == nil {
 			continue
 		}
 
-		if resource.Cpu() > 0 {
+		if vec.Get(cpuIdx) > 0 {
 			involvedResources[rs.CpuResource] = struct{}{}
 		}
 
-		if resource.Memory() > 0 {
+		if vec.Get(memIdx) > 0 {
 			involvedResources[rs.MemoryResource] = struct{}{}
 		}
 
-		if resource.GPUs() > 0 {
+		if vec.Get(gpuIdx) > 0 {
 			involvedResources[rs.GpuResource] = struct{}{}

⚠️ Potential issue | 🟠 Major

Include MIG-backed GPU demand when collecting involved resources.

getInvolvedResourcesNames only marks GPU involvement from the GpuResource slot. MIG-only vectors will not mark GPU as involved, so GPU saturation checks can be skipped even though utils.QuantifyVector counts MIG toward GPU quota.

🐛 Suggested fix
 func getInvolvedResourcesNames(resources []resource_info.ResourceVector, vectorMap *resource_info.ResourceVectorMap) map[rs.ResourceName]any {
 	involvedResources := map[rs.ResourceName]any{}
-	cpuIdx := vectorMap.GetIndex(string(v1.ResourceCPU))
-	memIdx := vectorMap.GetIndex(string(v1.ResourceMemory))
-	gpuIdx := vectorMap.GetIndex(commonconstants.GpuResource)
 
 	for _, vec := range resources {
 		if vec == nil {
 			continue
 		}
 
-		if vec.Get(cpuIdx) > 0 {
+		quantified := utils.QuantifyVector(vec, vectorMap)
+		if quantified[rs.CpuResource] > 0 {
 			involvedResources[rs.CpuResource] = struct{}{}
 		}
 
-		if vec.Get(memIdx) > 0 {
+		if quantified[rs.MemoryResource] > 0 {
 			involvedResources[rs.MemoryResource] = struct{}{}
 		}
 
-		if vec.Get(gpuIdx) > 0 {
+		if quantified[rs.GpuResource] > 0 {
 			involvedResources[rs.GpuResource] = struct{}{}
 		}
 	}
 
 	return involvedResources
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/proportion/reclaimable/reclaimable.go` around lines 269
- 289, getInvolvedResourcesNames currently only checks the GpuResource slot; add
a check for the MIG-backed GPU slot so MIG-only vectors also mark GPU as
involved: retrieve the MIG index from vectorMap (similar to
cpuIdx/memIdx/gpuIdx, e.g. migIdx :=
vectorMap.GetIndex(commonconstants.MigResource) or the project's MIG constant),
ensure the index is valid, and if vec.Get(migIdx) > 0 set
involvedResources[rs.GpuResource] = struct{}{} (keep the existing checks for
cpuIdx/memIdx/gpuIdx intact in the getInvolvedResourcesNames function).

Comment on lines +31 to +32
var testVectorMap = resource_info.NewResourceVectorMap()


⚠️ Potential issue | 🟠 Major

Use a single VectorMap per test topology end-to-end.

This file mixes a global testVectorMap with per-tree VectorMap instances. That can make vectors and indices inconsistent, creating order-dependent/false-positive tests.

✅ Suggested direction
-var testVectorMap = resource_info.NewResourceVectorMap()
+func newTestVectorMap() *resource_info.ResourceVectorMap {
+	return resource_info.NewResourceVectorMap()
+}
-			setupTopologyTree: func() *Info {
+			setupTopologyTree: func() *Info {
+				vm := newTestVectorMap()
 				tree := &Info{
-					Name:      "test-topology",
-					VectorMap: resource_info.NewResourceVectorMap(),
+					Name:      "test-topology",
+					VectorMap: vm,
 ...
-								IdleOrReleasingVector: resource_info.NewResource(0, 0, 0).ToVector(testVectorMap),
+								IdleOrReleasingVector: resource_info.NewResource(0, 0, 0).ToVector(vm),
-				tt.job.RootSubGroupSet.GetAllPodSets(), tasksResources.ToVector(testVectorMap), tasksCount,
+				tt.job.RootSubGroupSet.GetAllPodSets(), tasksResources.ToVector(tt.topologyTree.VectorMap), tasksCount,

Also applies to: 92-93, 107-108, 2126-2126

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/topology/job_filtering_test.go` around lines 31 - 32,
Tests currently mix a global testVectorMap (created via
resource_info.NewResourceVectorMap()) with per-tree VectorMap instances causing
inconsistent vector/indices; replace the global testVectorMap by creating a
single VectorMap inside each test and pass that same VectorMap instance to every
topology/tree creation in that test (instead of creating new maps inside
functions), update usages referencing testVectorMap so all calls use the local
VectorMap, and remove any other per-tree NewResourceVectorMap() calls so each
test runs end-to-end with one consistent VectorMap.

Comment on lines +59 to +64
// Get VectorMap from any node (all share the same map)
var sharedVectorMap *resource_info.ResourceVectorMap
for _, nodeInfo := range nodes {
sharedVectorMap = nodeInfo.VectorMap
break
}

⚠️ Potential issue | 🔴 Critical

Ensure VectorMap is never nil when no nodes exist.

If nodes is empty, sharedVectorMap stays nil and gets assigned into every topology tree. Any later vector conversion/indexing against that map can fail at runtime.

🛠️ Proposed fix
 func (t *topologyPlugin) initializeTopologyTree(topologies []*kaiv1alpha1.Topology, nodes map[string]*node_info.NodeInfo) {
-	// Get VectorMap from any node (all share the same map)
-	var sharedVectorMap *resource_info.ResourceVectorMap
+	sharedVectorMap := resource_info.NewResourceVectorMap()
 	for _, nodeInfo := range nodes {
-		sharedVectorMap = nodeInfo.VectorMap
-		break
+		if nodeInfo != nil && nodeInfo.VectorMap != nil {
+			sharedVectorMap = nodeInfo.VectorMap
+			break
+		}
 	}

Also applies to: 75-75

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/scheduler/plugins/topology/topology_plugin.go` around lines 59 - 64, If
nodes can be empty, ensure sharedVectorMap is initialized to a non-nil default
instead of staying nil: after the loop that sets sharedVectorMap from
nodeInfo.VectorMap, check if sharedVectorMap == nil and, if so, create and
assign a new resource_info.ResourceVectorMap (or call the existing
constructor/helper) before it is used or assigned into topology trees
(references: sharedVectorMap, nodes, resource_info.ResourceVectorMap); this
prevents nil derefs when later converting/indexing vectors.

enoodle and others added 5 commits March 6, 2026 16:25
…eads to vectors

Convert all read-path methods from Resource to ResourceVector operations:
- NodeInfo: IsTaskAllocatable, FittingError, GetSumOfIdleGPUs, IsCPUOnlyNode
- External plugins: proportion, topology, nodeplacement, nodeavailability, resourcetype
- Framework: session logging, statement references
- Error handling: pod_errors, job_errors

Add AcceptedResourceVector to PodInfo. Add QuantifyVector util to proportion
plugin. Rewrite topology calcNodeAccommodation from iterative pod probing to
division-based vector approach.

Resource fields still maintained via dual-write for backward compatibility
until removal in subsequent commits.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Thread vectorMap through getJobRatioToFreeResources, sortTree,
  sortTreeFromRoot to avoid allocating a new ResourceVectorMap on
  every call in the hot scheduling path
- Reorder gpuIdx guard before gpuRequest extraction for clarity
- Improve comment explaining releasing memory handling in fractional
  GPU capacity calculation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…sons

- Fix log messages in allocate.go that said "requires: %v GPUs" but now
  print a full resource vector
- Rewrite lessEqualVectorsExcludingGPU to iterate and skip the GPU index
  instead of mutating input vectors via save/zero/compare/restore, which
  is not safe under concurrent access

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@enoodle enoodle force-pushed the erez/migrate-node-info-pod-info-to-vectors branch from 4a7b05d to fb1e847 Compare March 6, 2026 16:00